Evanston
OpenAI close to finalizing $40 billion SoftBank-led funding
OpenAI is close to finalizing a $40 billion (about ¥6 trillion) funding round led by SoftBank Group -- with investors including Magnetar Capital, Coatue Management, Founders Fund and Altimeter Capital Management in talks to participate, according to people familiar with the matter. Magnetar Capital -- an Evanston, Illinois-based hedge fund -- could contribute up to $1 billion, according to multiple people, all of whom asked not to be identified because the information is private. The artificial intelligence developer's funding round would be the largest of all time, according to data compiled by research firm PitchBook. The deal is set to value the company at $300 billion including the money raised -- almost double the ChatGPT maker's previous valuation of $157 billion from when it raised money in October.
Mamba-based Deep Learning Approaches for Sleep Staging on a Wireless Multimodal Wearable System without Electroencephalography
Zhang, Andrew H., He-Mo, Alex, Yin, Richard Fei, Li, Chunlin, Tang, Yuzhi, Gurve, Dharmendra, Ghahjaverestan, Nasim Montazeri, Goubran, Maged, Wang, Bo, Lim, Andrew S. P.
Study Objectives: We investigate using Mamba-based deep learning approaches for sleep staging on signals from ANNE One (Sibel Health, Evanston, IL), a minimally intrusive dual-sensor wireless wearable system measuring chest electrocardiography (ECG), triaxial accelerometry, and temperature, as well as finger photoplethysmography (PPG) and temperature. Methods: We obtained wearable sensor recordings from 360 adults undergoing concurrent clinical polysomnography (PSG) at a tertiary care sleep lab. PSG recordings were scored according to AASM criteria. PSG and wearable sensor data were automatically aligned using their ECG channels, with manual confirmation by visual inspection. We trained Mamba-based models with both convolutional-recurrent neural network (CRNN) and recurrent neural network (RNN) architectures on these recordings. Ensembling of model variants with similar architectures was performed. Results: Our best approach, after ensembling, attains a 3-class (wake, NREM, REM) balanced accuracy of 83.50%, an F1 score of 84.16%, a Cohen's $\kappa$ of 72.68%, and an MCC score of 72.84%; a 4-class (wake, N1/N2, N3, REM) balanced accuracy of 74.64%, an F1 score of 74.56%, a Cohen's $\kappa$ of 61.63%, and an MCC score of 62.04%; and a 5-class (wake, N1, N2, N3, REM) balanced accuracy of 64.30%, an F1 score of 66.97%, a Cohen's $\kappa$ of 53.23%, and an MCC score of 54.38%. Conclusions: Deep learning models can infer major sleep stages from a wearable system without electroencephalography (EEG) and can be successfully applied to data from adults attending a tertiary care sleep clinic.
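To make the modeling approach concrete, here is a minimal, hypothetical sketch of a Mamba-based CRNN sleep stager in PyTorch. This is not the authors' code: it assumes the open-source mamba_ssm package (which requires a CUDA GPU), and the channel count, epoch handling, and layer sizes are illustrative guesses rather than values from the paper.

```python
import torch.nn as nn
from mamba_ssm import Mamba  # https://github.com/state-spaces/mamba (CUDA required)

class MambaCRNNStager(nn.Module):
    """Illustrative CRNN-style stager: conv front end + Mamba sequence model."""

    def __init__(self, in_channels=7, d_model=128, n_classes=5):
        super().__init__()
        # Convolutional front end: compress each 30 s epoch of raw multimodal
        # signal (ECG, PPG, accelerometry, temperature) into one feature vector.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Mamba state-space block models dependencies across the sequence of
        # epochs, standing in for the recurrent part of a conventional CRNN.
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_epochs, channels, samples_per_epoch)
        b, t, c, s = x.shape
        feats = self.encoder(x.reshape(b * t, c, s)).squeeze(-1)  # (b*t, d_model)
        feats = feats.reshape(b, t, -1)                           # (b, t, d_model)
        return self.head(self.mamba(feats))  # per-epoch stage logits
```

In this sketch the convolutional front end summarizes each epoch into a feature vector, and the Mamba block plays the role an LSTM or GRU would play in a conventional CRNN, modeling stage transitions across the night.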
Hybrid Primal Sketch: Combining Analogy, Qualitative Representations, and Computer Vision for Scene Understanding
Forbus, Kenneth D., Chen, Kezhen, Xu, Wangcheng, Usher, Madeline
One of the purposes of perception is to bridge between sensors and conceptual understanding. Marr's Primal Sketch combined initial edge-finding with multiple downstream processes to capture aspects of visual perception such as grouping and stereopsis. Given the progress made in multiple areas of AI since then, we have developed a new framework inspired by Marr's work: the Hybrid Primal Sketch. It combines computer vision components into an ensemble to produce sketch-like entities, which are then further processed by CogSketch, our model of high-level human vision, to produce both more detailed shape representations and scene representations that can be used for data-efficient learning via analogical generalization. This paper describes our theoretical framework, summarizes several previous experiments, and outlines a new experiment in progress on diagram understanding.
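As a purely illustrative aid, the following toy Python sketch mirrors the pipeline shape the abstract describes: an ensemble of vision components emits sketch-like entities, a CogSketch-like stage derives a qualitative scene representation, and an analogical step retrieves the most similar stored case. Every name and stub behaviour here is invented for exposition and does not reflect the actual CogSketch implementation.

```python
# Toy stand-in for the Hybrid Primal Sketch pipeline (hypothetical throughout).
from dataclasses import dataclass, field

@dataclass
class SketchEntity:
    label: str                      # e.g. "edge-group", "region"
    points: list                    # 2D points outlining the entity
    attributes: dict = field(default_factory=dict)

def edge_detector(image):
    # Stub for one vision component: pretend every image yields one edge group.
    return [SketchEntity("edge-group", [(0, 0), (1, 1)])]

def region_detector(image):
    # Stub for a second vision component in the ensemble.
    return [SketchEntity("region", [(0, 0), (0, 1), (1, 1), (1, 0)])]

def vision_ensemble(image):
    # Combine sketch-like entities from several computer vision components.
    entities = []
    for detector in (edge_detector, region_detector):
        entities.extend(detector(image))
    return entities

def qualitative_scene(entities):
    # Stand-in for CogSketch-style processing: derive a qualitative scene
    # description (here, just the set of entity labels present).
    return {"labels": sorted({e.label for e in entities})}

def analogical_generalize(scene, case_library):
    # Toy analogical retrieval: return the stored case sharing the most labels.
    overlap = lambda case: len(set(case["labels"]) & set(scene["labels"]))
    return max(case_library, key=overlap, default=None)

if __name__ == "__main__":
    library = [{"labels": ["edge-group"], "concept": "wire-frame"},
               {"labels": ["edge-group", "region"], "concept": "filled shape"}]
    scene = qualitative_scene(vision_ensemble(image="dummy"))
    print(analogical_generalize(scene, library))  # -> the "filled shape" case
```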
The teeniest robot in the world is a jumping crab
Engineers from Northwestern University in Evanston, Illinois have developed the smallest walking robot ever, and it's a crab. The half-millimeter robot is modeled after a peekytoe crab and is just the latest iteration in a long line of small robots created by the researchers. The goal of creating a bot so small is to move toward more practical uses of the technology and to gain entry to hard-to-reach, tightly confined spaces.
Powerful holographic camera is developed that can see through almost ANYTHING
A powerful holographic camera has been developed by scientists, and it is able to see through almost anything, including corners, fog, and even human flesh. The device, built by researchers from Northwestern University in Evanston, Illinois, uses a technique called 'synthetic wavelength holography'. It works by indirectly scattering light onto hidden objects; the light scatters again and travels back to a camera, where AI is used to reconstruct the original object. The team say the technology is a decade away from being commercially available, but once it arrives it could be used in cars, CCTV and even as a medical scanner. One example could be replacing the endoscope in colonoscopy, instead gathering light waves to see around the folds inside the intestines.
Is AI A Race To Be Won?
J. Frank Duryea wins one of the first American automobile races, from Chicago to Evanston, Illinois, in the car he co-created. The artificial intelligence race grabs headlines, and the narrative seldom varies: on the world stage, it's U.S. innovation versus China's speed. In business, senior leadership at all sorts of organizations is romanced by the prospect of winning the AI race. Some believe it's just a matter of greater investment and a policy change or two. This raises the question: Is AI a race to be won?
Aquatic robot inspired by sea creatures walks, rolls, transports cargo
Soft material behaves like a robot, moving with precision and agility without complex hardware, hydraulics or electricity. An embedded nickel skeleton enables the robot to respond to external magnetic fields. Life-like robotic materials could someday be used as 'smart' microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures. EVANSTON, Ill. -- It can walk at human speed, pick up and transport cargo to a new location, climb up hills and even break-dance to release a particle. The robot does not rely on complex hardware, hydraulics or electricity; instead, it is activated by light and walks in the direction of an external rotating magnetic field. The researchers imagine customizing the movements of miniature robots to help catalyze different chemical reactions and then pump out the valuable products. The robots also could be molecularly designed to recognize and actively remove unwanted particles in specific environments, or to use their mechanical movements and locomotion to precisely deliver bio-therapeutics or cells to specific tissues.
How Photos of Your Kids Are Powering Surveillance Technology
One day in 2005, a mother in Evanston, Ill., joined Flickr. Then she more or less forgot her account existed. The pictures of Chloe and Jasper Papa as kids are typically goofy fare: grinning with their parents; sticking their tongues out; costumed for Halloween. None of them could have foreseen that 14 years later, those images would reside in an unprecedentedly huge facial-recognition database called MegaFace, used to test and train some of the most sophisticated artificial intelligence systems in the world.
There is no general AI: Why Turing machines cannot pass the Turing test
Since 1950, when Alan Turing proposed what has since come to be called the Turing test, the ability of a machine to pass this test has established itself as the primary hallmark of general AI. To pass the test, a machine would have to be able to engage in dialogue in such a way that a human interrogator could not distinguish its behaviour from that of a human being. AI researchers have attempted to build machines that could meet this requirement, but they have so far failed. To pass the test, a machine would have to meet two conditions: (i) react appropriately to the variance in human dialogue and (ii) display a human-like personality and intentions. We argue, first, that it is impossible, for mathematical reasons, to program a machine which can master the enormously complex and constantly evolving pattern of variance which human dialogues contain. And second, that we do not know how to make machines that possess personality and intentions of the sort we find in humans. Since a Turing machine cannot master human dialogue behaviour, we conclude that a Turing machine also cannot possess what is called "general" Artificial Intelligence. We do, however, acknowledge the potential of Turing machines to master dialogue behaviour in highly restricted contexts, where what is called "narrow" AI can still be of considerable utility.